Post-processing ensemble prediction systems can improve weather forecasts, especially for extreme event prediction. In recent years, different machine learning models have been developed to improve the quality of the post-processing step. However, these models rely heavily on data, and generating such ensemble members requires multiple runs of numerical weather prediction models at high computational cost. This paper introduces the ENS-10 dataset, consisting of ten ensemble members spanning 20 years (1998-2017). The ensemble members are generated by perturbing numerical weather simulations to capture the chaotic behaviour of the Earth. To represent the three-dimensional state of the atmosphere, ENS-10 provides the most relevant atmospheric variables at 11 distinct pressure levels as well as at the surface, at 0.5-degree resolution. The dataset targets the forecast-correction task at a 48-hour lead time, which essentially improves forecast quality by removing the bias of the ensemble members. To this end, ENS-10 provides the weather variables at forecast lead times t = 0, 24, and 48 hours (two data points per week). We provide a set of baselines for this task on ENS-10 and compare their performance in correcting the forecasts of different weather variables. We also assess the baselines for predicting extreme events using the dataset. The ENS-10 dataset is available under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
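As a concrete illustration of the forecast-correction task, the sketch below removes the mean bias of the ensemble at each grid point. The array shapes, variable names, and train/test split are assumptions for illustration, not the ENS-10 data layout or any baseline from the paper.

```python
import numpy as np

def bias_correct(train_fc, train_obs, test_fc):
    """Per-grid-point bias-correction baseline for ensemble forecasts.

    train_fc : (n_samples, n_members, n_lat, n_lon) training forecasts
    train_obs: (n_samples, n_lat, n_lon)            verifying ground truth
    test_fc  : (m_samples, n_members, n_lat, n_lon) forecasts to correct
    """
    # Mean bias of the ensemble mean at each grid point, estimated on training data.
    bias = (train_fc.mean(axis=1) - train_obs).mean(axis=0)   # (n_lat, n_lon)
    # Shift every member of every test forecast by the estimated bias.
    return test_fc - bias[None, None, :, :]

# Toy usage with random fields standing in for one weather variable.
rng = np.random.default_rng(0)
obs = rng.normal(size=(100, 30, 60))
fc = obs[:, None] + 0.5 + rng.normal(scale=0.3, size=(100, 10, 30, 60))
corrected = bias_correct(fc[:80], obs[:80], fc[80:])
```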
Despite continuous improvements, precipitation forecasts are still not as accurate and reliable as those of other meteorological variables. A major contributing factor is that several key processes affecting precipitation distribution and intensity occur below the resolved scale of global weather models. The computer vision community has demonstrated success with generative adversarial networks (GANs) applied to super-resolution problems, i.e. learning to add fine-scale structure to coarse images. Leinonen et al. (2020) previously applied a GAN to produce ensembles of reconstructed high-resolution atmospheric fields, given coarser input data. In this paper, we demonstrate that this approach can be extended to the more challenging problem of increasing the accuracy and resolution of comparatively low-resolution input from a weather forecasting model, using high-resolution radar measurements as "ground truth". The neural network must learn to add resolution and structure whilst accounting for non-negligible forecast error. We show that GANs and VAE-GANs can match the statistical properties of state-of-the-art post-processing methods whilst creating high-resolution, spatially coherent precipitation maps. Our models compare favourably with the best existing downscaling methods in terms of pixel-wise and pooled CRPS scores, power spectrum information and rank histograms (used to assess calibration). We test our models and show how they perform in a range of scenarios, including heavy rainfall.
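For orientation, the evaluation metrics mentioned above can be computed from an ensemble of generated fields as in the following sketch of the standard empirical CRPS and rank histogram; the array layouts are assumed and this is not the paper's evaluation code.

```python
import numpy as np

def ensemble_crps(ens, obs):
    """Empirical CRPS of one ensemble forecast at one location (lower is better).

    ens: (n_members,) ensemble values, obs: scalar observation.
    CRPS = E|X - y| - 0.5 * E|X - X'|
    """
    term1 = np.abs(ens - obs).mean()
    term2 = np.abs(ens[:, None] - ens[None, :]).mean()
    return term1 - 0.5 * term2

def rank_histogram(ens, obs):
    """Counts of the observation's rank within its ensemble (flat = well calibrated).

    ens: (n_samples, n_members), obs: (n_samples,)
    """
    ranks = (ens < obs[:, None]).sum(axis=1)               # rank in 0 .. n_members
    return np.bincount(ranks, minlength=ens.shape[1] + 1)
```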
Can we improve the modelling of urban land surfaces with machine learning (ML)? A comparison of urban land surface models (ULSMs) in predicting all common surface fluxes found that no single model is "best". Here, we develop an urban neural network (UNN) trained on the mean predicted fluxes of 22 ULSMs at one site. The UNN accurately emulates the mean output of the ULSMs. Compared with a reference ULSM (Town Energy Balance; TEB), the UNN is more accurate relative to flux observations, has a lower computational cost, and requires fewer input parameters. When coupled to the Weather Research and Forecasting (WRF) model using TensorFlow bindings, WRF-UNN is stable and more accurate than the reference WRF-TEB. Although the application is currently constrained by the training data (one site), we demonstrate a new approach to improving the modelling of surface fluxes by combining the strengths of several ULSMs into one model using ML.
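A minimal sketch of the emulation idea, assuming per-time-step forcing features as inputs and ULSM-ensemble-mean fluxes as regression targets; the feature and flux dimensions and the Keras architecture are illustrative assumptions rather than the UNN described above.

```python
import numpy as np
import tensorflow as tf

# Hypothetical layout: atmospheric forcing as inputs, ensemble-mean surface
# fluxes (e.g. sensible and latent heat) as targets.
n_features, n_fluxes = 8, 2

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_fluxes),            # regression outputs, no activation
])
model.compile(optimizer="adam", loss="mse")

# Toy stand-in data; in practice these would be forcing observations and the
# mean flux predicted by the 22 ULSMs at the training site.
X = np.random.randn(10_000, n_features).astype("float32")
y = np.random.randn(10_000, n_fluxes).astype("float32")
model.fit(X, y, epochs=5, batch_size=256, verbose=0)
```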
Probabilistic forecasting consists of stating a probability distribution over future outcomes based on past observations. In meteorology, ensembles of physics-based numerical models are run to obtain such distributions. Usually, performance is evaluated with scoring rules, which are functions of the forecast distribution and the observation. With some scoring rules, the calibration and sharpness of the forecast can be assessed simultaneously. In deep learning, generative neural networks parametrize distributions on high-dimensional spaces and allow easy sampling by transforming draws from a latent variable. Conditional generative networks additionally condition the distribution on an input variable. In this manuscript, we perform probabilistic forecasting with conditional generative networks trained to minimize scoring rule values. In contrast to generative adversarial networks (GANs), no discriminator is required and training is stable. We perform experiments on two chaotic models and a global dataset of weather observations; the results are satisfactory and better calibrated than those achieved by GANs.
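As a sketch of training a conditional generative network by scoring-rule minimization, the example below uses the sample-based energy score as the loss; the network architecture, dimensions, and the choice of the energy score are assumptions for illustration, not the manuscript's exact setup.

```python
import tensorflow as tf

COND_DIM, LATENT_DIM, OUT_DIM = 4, 4, 8   # assumed toy dimensions

def energy_score(samples, obs, eps=1e-12):
    """Sample-based energy score, a proper scoring rule (lower is better).

    samples: (m, d) draws from the forecast distribution, obs: (d,) observation.
    ES = mean_i ||x_i - y|| - 0.5 * mean_{i,j} ||x_i - x_j||
    """
    term1 = tf.reduce_mean(tf.norm(samples - obs[None, :], axis=-1))
    diffs = samples[:, None, :] - samples[None, :, :]
    # eps keeps the gradient finite on the zero diagonal of the pairwise distances.
    term2 = tf.reduce_mean(tf.sqrt(tf.reduce_sum(diffs ** 2, axis=-1) + eps))
    return term1 - 0.5 * term2

# Conditional generator: (conditioning input, latent noise) -> forecast sample.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(COND_DIM + LATENT_DIM,)),
    tf.keras.layers.Dense(OUT_DIM),
])
opt = tf.keras.optimizers.Adam(1e-3)

def train_step(x_cond, y_obs, m=10):
    """One stochastic step: minimise the energy score over m generated samples."""
    z = tf.random.normal((m, LATENT_DIM))
    inp = tf.concat([tf.repeat(x_cond[None, :], m, axis=0), z], axis=-1)
    with tf.GradientTape() as tape:
        samples = generator(inp)              # (m, OUT_DIM) forecast samples
        loss = energy_score(samples, y_obs)
    grads = tape.gradient(loss, generator.trainable_variables)
    opt.apply_gradients(zip(grads, generator.trainable_variables))
    return loss
```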
The treatment of cloud structure in numerical weather and climate models is often considerably simplified to keep them computationally affordable. Here, we propose to correct the European Centre for Medium-Range Weather Forecasts 1D radiation scheme ecRad for 3D cloud effects using computationally cheap neural networks. The 3D cloud effects are learned as the difference between ecRad's fast 1D Tripleclouds solver, which neglects them, and its 3D SPARTACUS (Speedy Algorithm for Radiative Transfer through Cloud Sides) solver, which includes them but is about five times more computationally expensive. With typical errors between 20% and 30% of the 3D signal, the neural networks improve accuracy at a runtime increase of about 1%. Thus, rather than emulating the entire SPARTACUS solver, we keep Tripleclouds unchanged for the cloud-free parts of the atmosphere and correct it for 3D effects elsewhere. Assuming a similar signal-to-noise ratio for both, focusing on the relatively small 3D correction rather than the entire signal allows a significant improvement of the prediction.
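A minimal sketch of the correction idea, assuming per-column features and fluxes: a small network is trained on the difference between an expensive 3D solver and a cheap 1D solver, and the learned correction is added only where clouds are present. The shapes, feature choices, and cloud mask are illustrative assumptions, not the ecRad interface.

```python
import numpy as np
import tensorflow as tf

# Hypothetical shapes: per-column features -> per-column 3D flux correction.
n_features, n_outputs = 16, 2   # assumed; not the real ecRad interface

correction_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(n_outputs),
])
correction_net.compile(optimizer="adam", loss="mse")

# The training target is the 3D signal itself: SPARTACUS minus Tripleclouds.
# (Random stand-ins here; in practice these come from paired radiation runs.)
features = np.random.randn(50_000, n_features).astype("float32")
flux_tripleclouds = np.random.randn(50_000, n_outputs).astype("float32")
flux_spartacus = flux_tripleclouds + 0.1 * np.random.randn(50_000, n_outputs).astype("float32")
correction_net.fit(features, flux_spartacus - flux_tripleclouds, epochs=3, verbose=0)

def corrected_flux(feat, flux_1d, cloud_mask):
    """Cheap 1D fluxes plus the learned 3D correction in cloudy columns only."""
    delta = correction_net.predict(feat, verbose=0)
    return flux_1d + np.where(cloud_mask[:, None], delta, 0.0)
```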
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
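As a toy illustration of turning an extracted triple into a synthetic question-answer pair, the cloze-style template below keeps one argument as the answer and forms a question from the rest; the template and the example triple are assumptions and far cruder than the paraphrasing-based PIE-QG pipeline.

```python
def triple_to_qa(subject, predicate, obj, ask_about="object"):
    """Turn an OpenIE-style <subject, predicate, object> triple into a QA pair.

    One argument becomes the answer; the other two form a cloze-style question.
    The real pipeline paraphrases passages first and uses richer templates.
    """
    if ask_about == "object":
        return f"{subject} {predicate} what?", obj
    return f"Who or what {predicate} {obj}?", subject

# Example triple that might be extracted from a passage.
q, a = triple_to_qa("Marie Curie", "discovered", "polonium")
# ("Marie Curie discovered what?", "polonium")
```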
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
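A minimal sketch of penalized joint maximum likelihood for a simple MIRT-style model, assuming binary responses, a logistic link, and L2 penalties; the optimizer, penalty form, and parameterization are illustrative assumptions rather than the estimation procedure used in the paper.

```python
import numpy as np

def fit_mirt_jml(Y, mask, n_factors=3, lam=0.1, lr=0.05, n_iter=500, seed=0):
    """Penalised joint maximum likelihood for a toy MIRT-style model.

    Y    : (n_students, n_items) binary responses (0/1)
    mask : same shape, 1 where a response is observed (handles sparsity)
    Model: P(Y_ij = 1) = sigmoid(theta_i . a_j + b_j), with L2 penalties
    on the student factors theta and the item loadings a.
    """
    rng = np.random.default_rng(seed)
    n_s, n_i = Y.shape
    theta = 0.1 * rng.standard_normal((n_s, n_factors))   # student factors
    a = 0.1 * rng.standard_normal((n_i, n_factors))       # item loadings
    b = np.zeros(n_i)                                      # item intercepts
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta @ a.T + b)))       # predicted probabilities
        err = mask * (p - Y)                               # d(neg. log-lik.)/d(logits)
        theta -= lr * (err @ a / n_i + lam * theta)        # penalised gradient steps
        a -= lr * (err.T @ theta / n_s + lam * a)
        b -= lr * err.mean(axis=0)
    return theta, a, b
```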
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
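For orientation, here is a generic FedAvg-style skeleton showing where client sampling (CS), data sampling (DS), and local training (LT) enter the loop; it is a plain sketch of the baseline components, not the ProxSkip-type methods or the new algorithm discussed above, and all names and defaults are assumptions.

```python
import numpy as np

def local_training(w, data, grad_fn, lr=0.1, local_steps=5, batch_size=32, rng=None):
    """LT: several local gradient-type steps on one client's data."""
    rng = rng or np.random.default_rng()
    for _ in range(local_steps):
        idx = rng.choice(len(data), size=min(batch_size, len(data)), replace=False)  # DS
        w = w - lr * grad_fn(w, data[idx])
    return w

def fedavg(w0, clients, grad_fn, rounds=100, clients_per_round=10, seed=0):
    """FedAvg skeleton: the server averages models returned by sampled clients."""
    rng = np.random.default_rng(seed)
    w = w0
    for _ in range(rounds):
        chosen = rng.choice(len(clients), size=clients_per_round, replace=False)      # CS
        local_models = [local_training(w.copy(), clients[c], grad_fn, rng=rng)
                        for c in chosen]                                              # LT
        w = np.mean(local_models, axis=0)   # server aggregation step
    return w
```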
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.